• KOOLER

    I don't know how StarWind vSAN is run, but if it's on a hypervisor it's severely limited by I/O congestion through the kernel. NVMe drives cause problems that were of no concern whatsoever with spinners. Both KVM and Xen have done a lot of work to limit their I/O latency and now use polling techniques, but it's still a problem. That's why you really need SR-IOV on NVMe drives, so any VM can bypass the hypervisor and have only its own kernel left to slow things down.

    Anton: There are no problems with polling these days 🙂 You normally spawn an SPDK-enabled VM (Linux is unbeatable here, as most of the new-gen I/O development happens there), pass through RDMA-capable network hardware (a virtual function with SR-IOV or the whole card with PCIe pass-thru, this is really irrelevant...) and the NVMe drives, and... magic starts happening 🙂 This is how our NVMe-oF target works on ESXi & Hyper-V (KVM & Xen have no architectural benefits here; this is where you're either wrong or I failed to get your argument). It's possible to port SPDK into Windows user mode, but the lack of NVMe and NIC polling drivers takes away all the fun: to move the same amount of data we normally use ~4x more CPU horsepower in the "pure Windows" model vs. the "Linux SPDK VM on Windows" one. Microsoft is trying to bring SPDK into the Windows kernel (so is VMware, from what I know), but it needs a lot of work from NIC and NVMe engineers and... nobody wants to contribute. Really.
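    For anyone who wants to try that pass-through flow, a rough sketch (the PCI address 0000:3b:00.0 is a placeholder, find your own with lspci, and the path assumes a stock SPDK checkout inside the guest):

    ```shell
    # Host side: detach the NVMe drive (or an SR-IOV VF) from the host
    # kernel so libvirt can hand it to the VM as a <hostdev> device.
    sudo virsh nodedev-detach pci_0000_3b_00_0

    # Guest side: SPDK's helper script reserves hugepages and binds the
    # device to vfio-pci so the userspace polled-mode driver can claim it,
    # skipping the kernel block layer entirely.
    sudo HUGEMEM=4096 ./spdk/scripts/setup.sh
    ```

    Once setup.sh has claimed the drive, SPDK polls it from userspace, which is what makes the hypervisor's kernel I/O path irrelevant.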

    Just my $0.02 🙂

• scottalanmiller

    @dafyre said in How would you build a File server with 170TB of Usable Storage?:

    Systems like Ceph and Gluster don't necessarily need RAID since they can work with different disks in the same node.

    They definitely don't need RAID. That's their point.

• Storage Setup for KVM

    IT Discussion
    EddieJennings

    @travisdh1 said in Storage Setup for KVM:

    @emad-r said in Storage Setup for KVM:

    @eddiejennings

    keep your images and ISOs in the default location of /var/lib/libvirt/images/?

    Yes I do, but I create 2 new folders there, iso and vm.
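    If you want libvirt itself to know about those two folders, a quick sketch (the pool names iso and vm are just examples):

    ```shell
    # Split the default image location into ISO and disk-image
    # directories, and register each as a libvirt "dir" storage pool.
    sudo mkdir -p /var/lib/libvirt/images/iso /var/lib/libvirt/images/vm

    sudo virsh pool-define-as iso dir --target /var/lib/libvirt/images/iso
    sudo virsh pool-define-as vm  dir --target /var/lib/libvirt/images/vm
    sudo virsh pool-start iso && sudo virsh pool-autostart iso
    sudo virsh pool-start vm  && sudo virsh pool-autostart vm
    ```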

    Fedora will be presented a 4 TB block device?
    Why don't you separate that a little and have more fun? By "block device" I assume DAS; if not, why not make the storage reliable and robust and give it its own server, like another Fedora or CentOS install with RAID 10? The simplest way to share it is NFS. That way you can have many KVM hosts and the migration feature will actually work, you can do RAID on just /var, and you can scale easily by adding KVM nodes. The KVM nodes themselves can be defined in a state file (think SaltStack), so you can treat them as pure compute nodes.
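    A minimal sketch of that layout, for reference (the hostname storage01, the export path, and the node names are all made up for the example):

    ```shell
    # Storage server: share a RAID 10-backed directory over NFS with
    # the KVM nodes.
    echo '/srv/vmstore kvm1(rw,sync,no_root_squash) kvm2(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
    sudo exportfs -ra

    # Each KVM node: mount the same share where libvirt expects images,
    # so every node sees the same disks and guests can live-migrate.
    sudo mount -t nfs storage01:/srv/vmstore /var/lib/libvirt/images
    ```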

    Because @EddieJennings is talking about his home lab, which will consist of a single 1U server. That hadn't been mentioned in this thread.

    Bah! Folks should be able to read my mind ;). There were some good ideas in this thread though.

    What I decided on was giving / enough space to live comfortably, and giving everything else to /var.
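    With LVM that split is only a couple of commands; a sketch, assuming a volume group named vg0 on the 4 TB device (names and sizes are examples, not what was actually used):

    ```shell
    # Modest root volume, everything else to /var, which is where
    # libvirt keeps images and ISOs by default.
    sudo lvcreate -L 30G -n root vg0
    sudo lvcreate -l 100%FREE -n var vg0
    sudo mkfs.xfs /dev/vg0/root
    sudo mkfs.xfs /dev/vg0/var
    ```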